Add AWQ/SmoothQuant evaluation wrapper test for vLLM benchmark #3000
Conversation
Introduce a prototype AWQ/SmoothQuant benchmark within lm_eval (vLLM). Related Issue/PR: pytorch#2815. Test plan: A toy tokenizer model is introduced to validate `TransformerEvalWrapper`, and the toy linear models are updated to be compatible with it. The updated unit tests (`test_awq.py` and `test_smoothquant.py`) cover these changes. Future plan: This PR does not address latency or resource consumption; we need to expand the metrics for comprehensive AWQ/SmoothQuant benchmarks.
Looks fine. We could also just test on real models through the release scripts, I think.
Removed the toy tokenizer: it caused too many errors (mostly dispatch errors) when reproducing the AWQ API, so the tests now just use real models without it.
Summary:
This is a milestone toward building a benchmark for AWQ/SmoothQuant within the vLLM ecosystem (#2815). For the vLLM benchmark, AWQ/SmoothQuant needs a wrapper (TransformerEvalWrapper) to be compatible with vLLM. This PR therefore adds unit tests to verify that TransformerEvalWrapper works with AWQ/SmoothQuant.
Test plan:
A toy tokenizer model is introduced to validate TransformerEvalWrapper, and the toy linear models are updated to be compatible with it.
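For illustration, the toy tokenizer described above could look something like this minimal character-level sketch. The `ToyTokenizer` class and its method names here are hypothetical, not the actual code added in the PR; a real eval wrapper would expect `encode`/`decode` plus special-token ids along these lines:

```python
# Hypothetical sketch of a toy tokenizer for exercising an eval wrapper.
# Names and vocab are illustrative only, not torchao's actual test code.
class ToyTokenizer:
    """Maps each character to a stable integer id; id 0 is reserved for <eos>."""

    def __init__(self, vocab: str = "abcdefghijklmnopqrstuvwxyz "):
        self.stoi = {ch: i + 1 for i, ch in enumerate(vocab)}  # 0 is reserved
        self.itos = {i: ch for ch, i in self.stoi.items()}
        self.eos_id = 0

    def encode(self, text: str, eos: bool = False) -> list[int]:
        ids = [self.stoi[ch] for ch in text.lower() if ch in self.stoi]
        if eos:
            ids.append(self.eos_id)
        return ids

    def decode(self, ids: list[int]) -> str:
        return "".join(self.itos[i] for i in ids if i in self.itos)


tok = ToyTokenizer()
ids = tok.encode("awq")
print(ids)              # → [1, 23, 17]
print(tok.decode(ids))  # → awq
```

A deterministic, dependency-free tokenizer like this keeps the unit tests fast and reproducible, since the wrapper only needs consistent token ids, not a real vocabulary.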
Future plan:
This PR only validates the wrapper for the benchmark within the vLLM ecosystem. We still need to expand the metrics (e.g., beyond accuracy) for comprehensive AWQ/SmoothQuant benchmarks.